
Fix: Managed the import of torch.amp to be compatible with all pytorch versions #13487

Open · wants to merge 2 commits into master
Conversation

@paraglondhe098 commented Jan 10, 2025

I have read the CLA Document and I sign the CLA

Changes made

While training YOLOv5, I encountered the following warning:

FutureWarning: `torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.
  with torch.cuda.amp.autocast(amp)

To address this, I updated the import in train.py using a try-except block for compatibility with all PyTorch versions in requirements.txt:

try:
    import torch.amp as amp  # newer PyTorch: device-agnostic AMP module
except ImportError:
    import torch.cuda.amp as amp  # older PyTorch: CUDA-only AMP module

Additionally, the boolean variable amp (indicating whether to use automatic mixed precision training) was renamed to use_amp for clarity, since amp is now also the name of the imported module. A sketch of how this fits into a training step is shown below.
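As a rough illustration (not the exact train.py diff), the fallback import and the renamed use_amp flag might be wired into a training step as follows. The helper autocast_ctx, the _NEW_AMP flag, and the toy model/optimizer are hypothetical names used only for this sketch:

import torch

# Prefer the device-agnostic torch.amp API; fall back to torch.cuda.amp on
# older PyTorch releases that do not ship torch.amp.
try:
    import torch.amp as amp
    _NEW_AMP = True
except ImportError:
    import torch.cuda.amp as amp
    _NEW_AMP = False

use_amp = torch.cuda.is_available()  # enable mixed precision only when CUDA is present

def autocast_ctx(enabled):
    """Return an autocast context that works on both old and new PyTorch."""
    if _NEW_AMP:
        return amp.autocast("cuda", enabled=enabled)  # new API takes device_type first
    return amp.autocast(enabled=enabled)              # old API is CUDA-only

# GradScaler moved into torch.amp only in recent releases, so guard for it.
if _NEW_AMP and hasattr(amp, "GradScaler"):
    scaler = amp.GradScaler("cuda", enabled=use_amp)
else:
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

# Toy training step to exercise the pattern.
device = "cuda" if use_amp else "cpu"
model = torch.nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(4, 10, device=device), torch.randn(4, 1, device=device)

with autocast_ctx(use_amp):
    loss = torch.nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()

With use_amp=False (e.g., on a CPU-only machine), both the autocast context and the GradScaler become no-ops, so the same code path runs everywhere.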

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Improves AMP (Automatic Mixed Precision) integration with enhanced compatibility and functionality.

📊 Key Changes

  • 📥 Added fallback to torch.cuda.amp if torch.amp is not available (ensures compatibility across PyTorch versions).
  • 🔄 Replaced the use of amp variable with use_amp for better clarity and consistency.
  • 🛠 Updated AMP-related functionalities, including gradient scaling (GradScaler) and automatic casting (autocast), for seamless device type support (e.g., CPU, GPU); a minimal device-type sketch follows this summary.
  • 🧪 Modified AMP-related parameters in batch size estimation and validation logic.

🎯 Purpose & Impact

  • Improved compatibility: Ensures the training script works with older versions of PyTorch that lack torch.amp.
  • 🤖 Enhanced functionality: Supports mixed precision training across different device types (e.g., CPUs, GPUs).
  • 🚀 Better training performance: Enables more efficient use of hardware, potentially speeding up training while reducing memory usage.
  • 🌎 Future-proofing: Adapts to newer PyTorch features in a backward-compatible way, benefiting a broader range of users.
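To make the device-type point above concrete, here is a minimal, hypothetical sketch assuming a PyTorch version that ships torch.amp; the device_type and amp_dtype variables are illustrative and not part of the PR:

import torch

# The device-agnostic API lets one context manager cover both
# CUDA (float16) and CPU (bfloat16) autocasting.
device_type = "cuda" if torch.cuda.is_available() else "cpu"
amp_dtype = torch.float16 if device_type == "cuda" else torch.bfloat16

model = torch.nn.Linear(8, 2).to(device_type)
x = torch.randn(2, 8, device=device_type)

with torch.amp.autocast(device_type, dtype=amp_dtype):
    out = model(x)

print(out.dtype)  # low-precision dtype inside the autocast region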


github-actions bot commented Jan 10, 2025

All Contributors have signed the CLA. ✅
Posted by the CLA Assistant Lite bot.

@UltralyticsAssistant added the dependencies and enhancement labels Jan 10, 2025
@UltralyticsAssistant (Member) commented:

👋 Hello @paraglondhe098, thank you for submitting a 🚀 PR to ultralytics/yolov5! This is an automated response to help streamline the review process. An Ultralytics engineer will review your contribution soon. In the meantime, please ensure the following checklist is complete:

  • Purpose and Description: Your PR addresses AMP (Automatic Mixed Precision) compatibility warnings across PyTorch versions in train.py. Great work documenting the purpose with details and linking it to a specific warning! If there's a related issue in the repo, consider mentioning or linking it for context.
  • Synchronize with main: Ensure your PR is up-to-date with the ultralytics/yolov5 main branch. If it's behind, update it locally using git pull or by clicking the "Update branch" button on GitHub.
  • Continuous Integration (CI): Confirm all CI checks have passed. You can explore the CI framework used in our documentation. 🚦 If there are any failures, please address them for seamless merging.
  • Documentation and Clarity: You've clearly renamed variables for clarity (e.g., amp ➡️ use_amp), and your detailed comments and organized changes will benefit all users. If further documentation updates for this feature are required, ensure to log them in Ultralytics Docs.
  • Testing: If there are edge cases or additional code paths impacted by your changes, include or update relevant tests. Also, verify that all existing and new tests pass.
  • Contributor License Agreement (CLA): Please confirm you've signed the CLA (and you already referenced that you've done this - awesome! 🎉). For new contributors, include this line in the comments: "I have read the CLA Document and I sign the CLA."
  • Minimize Changes: Keep changes as focused as possible on AMP-related fixes to maintain readability and simplicity. As always, "It is not daily increase but daily decrease: hack away the unessential." — Bruce Lee 😊

To reproduce and understand the issue you're addressing more clearly, a Minimum Reproducible Example (MRE) demonstrating the AMP warning context would be useful for the reviewers. If you can provide an example of the exact conditions under which the error occurs (e.g., a specific PyTorch version, configuration details, or dataset), it will aid in validation and testing.

For more information, refer to our Contributing Guide. If questions come up or further clarification is needed, feel free to add comments here. This looks like a solid and impactful improvement - thank you for contributing to the community! 🚀✨

@paraglondhe098 (Author) commented:

I have read the CLA Document and I sign the CLA

@paraglondhe098 changed the title from "managed the import of torch.amp to be compatible with all pytorch ver…" to "managed the import of torch.amp to be compatible with all pytorch versions" on Jan 11, 2025
@paraglondhe098 changed the title from "managed the import of torch.amp to be compatible with all pytorch versions" to "Fix: managed the import of torch.amp to be compatible with all pytorch versions" on Jan 11, 2025
@paraglondhe098 changed the title from "Fix: managed the import of torch.amp to be compatible with all pytorch versions" to "Fix: Managed the import of torch.amp to be compatible with all pytorch versions" on Jan 12, 2025